
A kernel-induced space selection approach to model selection of KLDA


Abstract

Model selection in kernel linear discriminant analysis (KLDA) refers to the selection of appropriate parameters for the kernel function and the regularizer. Following the principle of maximum information preservation, this paper formulates the model selection problem as one of selecting an optimal kernel-induced space in which the different classes are maximally separated from each other. A scatter-matrix-based criterion is developed to measure the "goodness" of a kernel-induced space, and the kernel parameters are tuned by maximizing this criterion. The criterion is computationally efficient and is differentiable with respect to the kernel parameters. Compared with leave-one-out (LOO) or k-fold cross-validation (CV), the proposed approach achieves faster model selection, especially when the number of training samples is large or when many kernel parameters need to be tuned. To tune the regularization parameter in KLDA, our criterion is used together with the method proposed by Saadi et al. (2004). Experiments on benchmark data sets verify the effectiveness of this model selection approach.
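The abstract does not spell the criterion out, but a scatter-matrix separability measure of this kind can be evaluated purely from the kernel matrix, since the traces of the between-class and within-class scatter matrices in the induced space reduce to sums of kernel values. The sketch below is a minimal illustration under assumptions of our own, not the paper's implementation: an RBF kernel, the trace ratio tr(S_b)/tr(S_w) as the "goodness" measure, and a grid search over the kernel width in place of the gradient-based tuning the abstract alludes to. The names rbf_kernel, scatter_criterion, and tune_sigma are hypothetical.

```python
import numpy as np

def rbf_kernel(X, sigma):
    # Pairwise RBF kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def scatter_criterion(K, y):
    # Class separability in the kernel-induced space, computed entirely from
    # kernel evaluations: J = tr(S_b) / tr(S_w), where S_b and S_w are the
    # between- and within-class scatter matrices of the mapped data.
    n = len(y)
    tr_St = np.trace(K) - K.sum() / n    # total scatter tr(S_t)
    tr_Sb = -K.sum() / n                 # between-class scatter tr(S_b)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        tr_Sb += K[np.ix_(idx, idx)].sum() / len(idx)
    tr_Sw = tr_St - tr_Sb                # tr(S_w) = tr(S_t) - tr(S_b)
    return tr_Sb / tr_Sw

def tune_sigma(X, y, sigmas):
    # Pick the kernel width that maximizes the separability criterion.
    # (The paper exploits the criterion's differentiability for gradient-based
    # tuning; a simple grid search stands in for that here.)
    scores = [scatter_criterion(rbf_kernel(X, s), y) for s in sigmas]
    return sigmas[int(np.argmax(scores))]

# Toy usage: two Gaussian classes in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(3.0, 1.0, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
best_sigma = tune_sigma(X, y, np.logspace(-2, 2, 25))
print(f"selected sigma: {best_sigma:.3f}")
```

Because each evaluation needs only the n-by-n kernel matrix, the criterion is cheap to recompute per candidate parameter, which is consistent with the speed advantage over LOO and k-fold CV claimed in the abstract.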
